The standard diagnostic procedure for targeted therapy in lung cancer involves histological subtyping followed by testing for key driver mutations such as EGFR. Although molecular profiling can uncover these driver mutations, the process is often expensive and time-consuming. Deep-learning-based image analysis offers a more economical alternative for discovering driver mutations directly from whole slide images (WSIs). In this work, we used a customized deep learning pipeline with weak supervision to identify the morphological correlates of EGFR mutation in hematoxylin-and-eosin-stained WSIs, in addition to detecting tumor and identifying its histological subtype. We demonstrate the effectiveness of the pipeline through rigorous experiments and ablation studies on two lung cancer datasets: TCGA and a private dataset from India. Using this pipeline, we evaluated the average area under the curve (AUC) for tumor detection and achieved an average AUC of 0.942 for histological subtyping between adenocarcinoma and squamous cell carcinoma on the TCGA dataset. For EGFR detection, we achieved an average AUC of 0.864 on the TCGA dataset and 0.783 on the Indian dataset. Our key learnings are as follows. First, there is no particular advantage to using feature extractor layers trained on histology if the feature extractor is to be fine-tuned on the target dataset. Second, selecting patches with higher cellularity, presumably to capture tumor regions, is not always helpful, because indications of the disease class may also be present in the stroma adjoining the tumor.
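The abstract above does not spell out the pipeline's implementation; as a rough illustration of how weak slide-level supervision is commonly realized, the sketch below shows an attention-based multiple-instance learning aggregator over pre-extracted patch features. The class names, feature dimensions, and patch counts are illustrative assumptions, not details from the paper.

```python
# Minimal sketch (not the authors' code): attention-based MIL over patch
# features, so only the slide-level label supervises training.
import torch
import torch.nn as nn

class AttentionMIL(nn.Module):
    def __init__(self, feat_dim=512, hidden_dim=128, n_classes=2):
        super().__init__()
        # One attention score per patch; the weighted average forms the slide embedding.
        self.attention = nn.Sequential(
            nn.Linear(feat_dim, hidden_dim), nn.Tanh(), nn.Linear(hidden_dim, 1)
        )
        self.classifier = nn.Linear(feat_dim, n_classes)

    def forward(self, patch_feats):                       # (n_patches, feat_dim)
        weights = torch.softmax(self.attention(patch_feats), dim=0)
        slide_feat = (weights * patch_feats).sum(dim=0)   # (feat_dim,)
        return self.classifier(slide_feat), weights

# Patch features could come from any extractor (ImageNet- or histology-pretrained).
model = AttentionMIL()
logits, attn = model(torch.randn(1000, 512))              # one slide, 1000 patches
```

The attention weights also provide a crude patch-level localization signal, which is one reason this family of aggregators is popular for WSI tasks with only slide-level labels.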
Pictionary, the popular sketch-based guessing game, provides an opportunity to analyze shared-goal cooperative game play in restricted communication settings. However, some players occasionally draw atypical sketch content. While such content is occasionally relevant in the game context, it sometimes represents a rule violation and impairs the game experience. To address such situations in a timely and scalable manner, we introduce DrawMon, a novel distributed framework for automatic detection of atypical sketch content in concurrently occurring Pictionary game sessions. We build specialized online interfaces to collect game session data and annotate atypical sketch content, resulting in AtyPict, the first-ever atypical sketch content dataset. We use AtyPict to train CanvasNet, a deep neural atypical content detection network, and utilize CanvasNet as a core component of DrawMon. Our analysis of post-deployment game session data indicates DrawMon's effectiveness for scalable monitoring and atypical sketch content detection. Beyond Pictionary, our contributions also serve as a design guide for customized atypical content response systems involving shared and interactive whiteboards. Code and datasets are available at https://drawm0n.github.io.
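CanvasNet's architecture is not described in the abstract; purely as an illustrative simplification, the sketch below treats per-canvas atypical-content flagging as binary classification of rasterized canvas snapshots with a small CNN. The network, input format, and class layout are assumptions, not the paper's design (CanvasNet additionally localizes the atypical content).

```python
# Illustrative sketch only: flag atypical content in a rasterized canvas
# snapshot with a small CNN; all architectural choices are assumptions.
import torch
import torch.nn as nn

class CanvasFlagger(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 2)          # typical vs. atypical

    def forward(self, canvas):                # (batch, 1, H, W) grayscale strokes
        return self.head(self.features(canvas).flatten(1))

scores = CanvasFlagger()(torch.rand(4, 1, 256, 256))   # one score pair per canvas
```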
A growing body of work has recognized the importance of leveraging advances in machine learning (ML) to meet the need for effective automation of access control attribute extraction, policy mining, policy verification, access decisions, and more. In this work, we survey and summarize various ML approaches to solving different access control problems. We propose a novel taxonomy of ML model applications in the access control domain. We highlight current limitations and open challenges, such as the lack of public real-world datasets, the administration of ML-based access control systems, and understanding the decisions of black-box ML models, and enumerate future research directions.
Natural Language Generation (NLG) represents a large collection of tasks in the field of NLP. While many of these tasks have been tackled well by the cross-entropy (CE) loss, the task of dialog generation poses a few unique challenges for this loss function. First, CE loss assumes that for any given input, the only possible output is the one available as the ground truth in the training dataset. In general, this is not true for any task, as there can be multiple semantically equivalent sentences, each with a different surface form. This problem is exacerbated further for the dialog generation task, as there can be multiple valid responses (for a given context) that not only have different surface forms but are also not semantically equivalent. Second, CE loss does not take the context into consideration while processing the response and, hence, it treats all ground truths with equal importance irrespective of the context. But we may want our final agent to avoid certain classes of responses (e.g. bland, non-informative, or biased responses) and to assign relatively higher weight to more context-specific responses. To circumvent these shortcomings of the CE loss, in this paper we propose a novel loss function, CORAL, that directly optimizes recently proposed estimates of human preference for generated responses. Using CORAL, we can train dialog generation models without assuming that the ground truth is the only valid response. Moreover, the CORAL loss is computed based on both the context and the response. Extensive comparisons on two benchmark datasets show that the proposed methods outperform strong state-of-the-art baseline models of different sizes.
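The abstract does not give CORAL's exact form; the snippet below sketches one generic way a context-aware preference estimate can be folded into a generation loss, by scaling the response's negative log-likelihood with a preference score for that (context, response) pair. The scoring model, function names, and tensor shapes are assumptions, not the paper's definition.

```python
# Hedged sketch (not the paper's CORAL definition): weight the response's
# negative log-likelihood by a context-aware preference score, so responses
# preferred for this particular context contribute more to the gradient.
import torch
import torch.nn.functional as F

def preference_weighted_nll(logits, response_ids, preference_score, pad_id=0):
    """logits: (T, vocab); response_ids: (T,); preference_score: scalar in [0, 1]
    produced by any context+response preference model (an assumption here)."""
    nll = F.cross_entropy(logits, response_ids, ignore_index=pad_id, reduction="mean")
    return preference_score * nll

logits = torch.randn(12, 32000, requires_grad=True)     # stand-in for decoder outputs
loss = preference_weighted_nll(logits, torch.randint(1, 32000, (12,)),
                               preference_score=torch.tensor(0.8))
loss.backward()
```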
To provide infrastructure to the agricultural industry, advanced technologies such as big data, cloud computing, and the Internet of Things (IoT) need to be leveraged. Smart farming is a management concept focused on providing the infrastructure needed to track, monitor, automate, and analyze operations. Representing the knowledge extracted from the collected primary data is of central importance. This study presents an agricultural knowledge framework for smart farming systems, in which a knowledge graph is represented as a lattice to capture and reason over spatiotemporal agricultural data.
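The paper's lattice representation is not detailed in this abstract; as a minimal illustration of capturing a spatiotemporal agricultural observation as knowledge-graph triples, the snippet below uses rdflib with a made-up namespace and vocabulary.

```python
# Minimal illustration (made-up namespace and terms, not the paper's schema):
# store and query a spatiotemporal observation as triples with rdflib.
from rdflib import Graph, Literal, Namespace, RDF
from rdflib.namespace import XSD

FARM = Namespace("http://example.org/farm#")
g = Graph()

obs = FARM["obs-001"]
g.add((obs, RDF.type, FARM.SoilMoistureReading))
g.add((obs, FARM.field, FARM["field-12"]))                                   # where
g.add((obs, FARM.observedAt, Literal("2023-05-01T06:00:00", datatype=XSD.dateTime)))  # when
g.add((obs, FARM.valuePercent, Literal(23.5, datatype=XSD.double)))          # what

# All readings for field-12, with their timestamps.
q = """SELECT ?o ?t WHERE {
         ?o <http://example.org/farm#field> <http://example.org/farm#field-12> ;
            <http://example.org/farm#observedAt> ?t . }"""
for row in g.query(q):
    print(row.o, row.t)
```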
This paper explores the environmental implications of the super-linear growth trends of AI from a holistic perspective, spanning data, algorithms, and system hardware. We characterize the carbon footprint of AI computing by examining the model development cycle across industry-scale machine learning use cases, while taking into account the life cycle of system hardware. Going a step further, we capture the operational and manufacturing carbon footprints of AI computing and present an end-to-end analysis of how hardware-software design and at-scale optimizations can help reduce the overall carbon footprint of AI. Based on industry experience and lessons learned, we share key challenges and chart important development directions across the many dimensions of AI. We hope that the key messages and insights presented in this paper can inspire the community to advance the field of AI in an environmentally responsible manner.
Federated learning (FL) enables the distributed computation of machine learning models over disparate, remote data sources, without requiring any individual data to be transferred to a centralized location. This results in improved generalizability of the models and efficient scaling of computation as more sources and larger datasets are added to the federation. However, recent membership attacks show that private or sensitive personal data can sometimes be leaked or inferred when model parameters or summary statistics are shared with a central site, calling for improved security solutions. In this work, we propose a secure FL framework that uses fully homomorphic encryption (FHE). Specifically, we use the CKKS construction, an approximate, floating-point-compatible scheme that benefits from ciphertext packing and rescaling. In our evaluation on large-scale brain MRI datasets, we use the proposed secure FL framework to train a deep learning model that predicts a person's age from distributed MRI scans, a common benchmarking task, and demonstrate that there is no degradation in learning performance between the encrypted and non-encrypted federated models.
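The abstract names the CKKS scheme but not a particular library; assuming the TenSEAL library purely for illustration, the sketch below shows how clients could encrypt weight updates so a server can aggregate them homomorphically without seeing plaintext values.

```python
# Hedged sketch assuming TenSEAL (the paper does not name a library):
# clients encrypt weight deltas with CKKS; the server adds ciphertexts and
# only the key holder decrypts the aggregate.
import tenseal as ts

context = ts.context(ts.SCHEME_TYPE.CKKS, poly_modulus_degree=8192,
                     coeff_mod_bit_sizes=[60, 40, 40, 60])
context.global_scale = 2 ** 40
context.generate_galois_keys()

client_updates = [[0.1, -0.2, 0.05], [0.3, 0.1, -0.1]]         # toy weight deltas
encrypted = [ts.ckks_vector(context, u) for u in client_updates]

aggregate = encrypted[0] + encrypted[1]         # server-side: homomorphic addition
averaged = aggregate * (1.0 / len(encrypted))   # still encrypted

print(averaged.decrypt())  # key holder recovers approximately [0.2, -0.05, -0.025]
```

CKKS is approximate, so decrypted values carry small numerical error, which is why the scheme suits floating-point model parameters rather than exact integer data.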
Visual search is a ubiquitous and often challenging daily task, exemplified by looking for the car keys at home or finding a friend in a crowd. An intriguing property of some classical search tasks is an asymmetry such that finding a target A among distractors B can be easier than finding B among A. To elucidate the mechanisms responsible for asymmetry in visual search, we propose a computational model that takes a target and a search image as inputs and produces a sequence of eye movements until the target is found. The model integrates eccentricity-dependent visual recognition with target-dependent top-down cues. We compare the model against human behavior in six paradigmatic search tasks in which humans show asymmetry. Without prior exposure to the stimuli or task-specific training, the model provides a plausible mechanism for search asymmetry. We hypothesize that the polarity of search asymmetry arises from experience with the natural environment. We tested this hypothesis by training the model on augmented versions of ImageNet in which the biases of natural images were either removed or reversed. Depending on the training protocol, the polarity of search asymmetry disappeared or was altered. This study highlights how classical perceptual properties can emerge in neural network models without task-specific training, as a consequence of the statistical properties of the developmental diet fed to the model. All source code and data are publicly available at https://github.com/kreimanlab/VisualSearchAsymmetry.
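The full model combines eccentricity-dependent recognition with top-down target cues; as a heavily simplified illustration of the top-down component alone, the sketch below cross-correlates a target template with the search image and selects successive fixations with inhibition of return. All functions and parameters here are assumptions for illustration, not the authors' implementation.

```python
# Greatly simplified illustration (not the authors' model): use the target
# template as a top-down cue via cross-correlation, fixate the peak, and
# suppress visited locations (inhibition of return).
import numpy as np
from scipy.signal import correlate2d

def fixation_sequence(search_img, target, n_fix=5, ior_radius=8):
    attn = correlate2d(search_img, target, mode="same")
    fixations = []
    for _ in range(n_fix):
        y, x = np.unravel_index(np.argmax(attn), attn.shape)
        fixations.append((y, x))
        attn[max(0, y - ior_radius):y + ior_radius,
             max(0, x - ior_radius):x + ior_radius] = -np.inf
    return fixations

print(fixation_sequence(np.random.rand(64, 64), np.random.rand(9, 9)))
```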
Existing federated classification algorithms typically assume the local annotations at every client cover the same set of classes. In this paper, we aim to lift such an assumption and focus on a more general yet practical non-IID setting where every client can work on non-identical and even disjoint sets of classes (i.e., client-exclusive classes), and the clients have a common goal which is to build a global classification model to identify the union of these classes. Such heterogeneity in client class sets poses a new challenge: how to ensure different clients are operating in the same latent space so as to avoid the drift after aggregation? We observe that the classes can be described in natural languages (i.e., class names) and these names are typically safe to share with all parties. Thus, we formulate the classification problem as a matching process between data representations and class representations and break the classification model into a data encoder and a label encoder. We leverage the natural-language class names as the common ground to anchor the class representations in the label encoder. In each iteration, the label encoder updates the class representations and regulates the data representations through matching. We further use the updated class representations at each round to annotate data samples for locally-unaware classes according to similarity and distill knowledge to local models. Extensive experiments on four real-world datasets show that the proposed method can outperform various classical and state-of-the-art federated learning methods designed for learning with non-IID data.
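The abstract describes splitting the classifier into a data encoder and a label encoder and classifying by matching data representations to class-name representations; the sketch below shows one generic way such matching can be written, with placeholder encoders, dimensions, and temperature that are assumptions rather than the paper's architecture.

```python
# Sketch of matching-based classification with placeholder encoders:
# score each sample embedding against every class-name embedding.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MatchingClassifier(nn.Module):
    def __init__(self, data_encoder, label_encoder, temperature=0.07):
        super().__init__()
        self.data_encoder, self.label_encoder = data_encoder, label_encoder
        self.temperature = temperature

    def forward(self, x, class_name_tokens):
        z = F.normalize(self.data_encoder(x), dim=-1)                   # (B, d)
        c = F.normalize(self.label_encoder(class_name_tokens), dim=-1)  # (C, d)
        return z @ c.t() / self.temperature                             # (B, C) logits

# Toy placeholders: any data/label encoders producing d-dimensional vectors work.
d = 32
model = MatchingClassifier(nn.Linear(10, d), nn.EmbeddingBag(100, d, mode="mean"))
logits = model(torch.randn(4, 10), torch.randint(0, 100, (3, 6)))  # 4 samples, 3 classes
loss = F.cross_entropy(logits, torch.tensor([0, 2, 1, 0]))
```

Because the class representations come from shared class names rather than client-local classifier heads, clients with disjoint class sets can still be anchored in a common latent space.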
The rise in data has led to the need for dimension reduction techniques, especially in the area of non-scalar variables, including time series, natural language processing, and computer vision. In this paper, we specifically investigate dimension reduction for time series through functional data analysis. Current methods for dimension reduction in functional data are functional principal component analysis and functional autoencoders, which are limited to linear mappings or scalar representations of the time series and are therefore inefficient. In real data applications, the nature of the data is much more complex. We propose a non-linear function-on-function approach, consisting of a functional encoder and a functional decoder, that uses continuous hidden layers of continuous neurons to learn the structure inherent in functional data, addressing the aforementioned concerns with existing approaches. Our approach yields a low-dimensional latent representation by reducing both the number of functional features and the number of timepoints at which the functions are observed. The effectiveness of the proposed model is demonstrated through multiple simulations and real data examples.
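The abstract's continuous hidden layers are not specified here; one plausible reading is a function-on-function layer implemented as an integral operator with a learned kernel, discretized on the observation grid with quadrature weights. The sketch below follows that assumption with arbitrary grid sizes.

```python
# Hedged sketch (one plausible discretization, not the paper's code):
# out(s) = tanh( \int W(s, t) x(t) dt + b(s) ), approximated by a Riemann sum.
import torch
import torch.nn as nn

class IntegralLayer(nn.Module):
    def __init__(self, n_in_points, n_out_points, dt):
        super().__init__()
        self.kernel = nn.Parameter(torch.randn(n_out_points, n_in_points) * 0.01)
        self.bias = nn.Parameter(torch.zeros(n_out_points))
        self.dt = dt                                     # grid spacing of the input curve

    def forward(self, x):                                # x: (batch, n_in_points)
        return torch.tanh(x @ (self.kernel.t() * self.dt) + self.bias)

# Encoder compresses both the feature and the time axes; decoder maps back.
encoder = IntegralLayer(n_in_points=100, n_out_points=10, dt=1.0 / 100)
decoder = IntegralLayer(n_in_points=10, n_out_points=100, dt=1.0 / 10)
recon = decoder(encoder(torch.randn(8, 100)))            # (8, 100)
```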